Perceptron Mistake Bounds
Authors
Abstract
We present a brief survey of existing mistake bounds and introduce novel bounds for the Perceptron or the kernel Perceptron algorithm. Our novel bounds generalize beyond standard margin-loss type bounds, allow for any convex and Lipschitz loss function, and admit a very simple proof.
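For orientation, the classical Novikoff-style guarantee surveyed in this line of work states that on a sequence that is linearly separable with margin γ inside a ball of radius R, the Perceptron makes at most (R/γ)² mistakes. The following is a minimal sketch of an online kernel Perceptron that counts its mistakes, the quantity such bounds control; the RBF kernel, the synthetic data, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel; the choice of kernel is an illustrative assumption."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_perceptron(X, y, kernel=rbf_kernel):
    """Single online pass of the kernel Perceptron.

    Returns the dual coefficients (nonzero only on mistake rounds) and the
    number of mistakes made on the sequence.
    """
    n = len(X)
    alpha = np.zeros(n)
    mistakes = 0
    for t in range(n):
        # prediction is the sign of the kernel expansion over past mistake rounds
        score = sum(alpha[i] * y[i] * kernel(X[i], X[t]) for i in range(t))
        if y[t] * score <= 0:        # mistake (or zero score): add x_t to the expansion
            alpha[t] = 1.0
            mistakes += 1
    return alpha, mistakes

# Tiny illustrative run on synthetic, linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
_, m = kernel_perceptron(X, y)
print("mistakes on one pass:", m)
```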
Similar resources
A New Perspective on an Old Perceptron Algorithm
We present a generalization of the Perceptron algorithm. The new algorithm performs a Perceptron-style update whenever the margin of an example is smaller than a predefined value. We derive worst case mistake bounds for our algorithm. As a byproduct we obtain a new mistake bound for the Perceptron algorithm in the inseparable case. We describe a multiclass extension of the algorithm. This exten...
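As a rough illustration of the variant described above, the sketch below performs a Perceptron-style update whenever the functional margin of an example falls below a fixed threshold rho; the threshold value, learning rate, and training loop are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np

def margin_perceptron(X, y, rho=0.5, epochs=1):
    """Perceptron-style update whenever the functional margin falls below rho.

    With rho = 0 this reduces to the standard mistake-driven Perceptron;
    rho > 0 also triggers updates on correctly classified low-margin points.
    """
    w = np.zeros(X.shape[1])
    updates = 0
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            if y_t * np.dot(w, x_t) <= rho:   # margin too small: update
                w += y_t * x_t
                updates += 1
    return w, updates
```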
The Structured Weighted Violations Perceptron Algorithm
We present the Structured Weighted Violations Perceptron (SWVP) algorithm, a new structured prediction algorithm that generalizes the Collins Structured Perceptron (CSP, (Collins, 2002)). Unlike CSP, the update rule of SWVP explicitly exploits the internal structure of the predicted labels. We prove the convergence of SWVP for linearly separable training sets, provide mistake and generalization...
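For context, the Collins Structured Perceptron that SWVP generalizes scores candidate structures with a joint feature map and updates with the feature difference between the gold structure and the highest-scoring prediction. The sketch below shows only that baseline CSP update, under an assumed small candidate set and a user-supplied feature map phi; the SWVP weighting of violations inside the predicted structure is not reproduced here.

```python
import numpy as np

def structured_perceptron(data, candidates, phi, dim, epochs=5):
    """Minimal Collins-style structured Perceptron sketch.

    data:        list of (x, y_gold) pairs
    candidates:  function x -> iterable of candidate structures (assumed small)
    phi:         joint feature map (x, y) -> np.ndarray of length dim
    """
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_gold in data:
            # inference: highest-scoring candidate under the current weights
            y_hat = max(candidates(x), key=lambda y: w @ phi(x, y))
            if y_hat != y_gold:
                # standard CSP update; SWVP instead weights internal violations
                w += phi(x, y_gold) - phi(x, y_hat)
    return w
```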
Prediction on a Graph with a Perceptron
We study the problem of online prediction of a noisy labeling of a graph with the perceptron. We address both label noise and concept noise. Graph learning is framed as an instance of prediction on a finite set. To treat label noise we show that the hinge loss bounds derived by Gentile [1] for online perceptron learning can be transformed to relative mistake bounds with an optimal leading const...
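One standard way to run a kernel Perceptron over the vertices of a graph is to use the pseudoinverse of the graph Laplacian as the kernel, so that predictions respect the graph's connectivity. The sketch below illustrates that idea only; the toy path graph, the prediction order, and the labeling are assumptions for illustration.

```python
import numpy as np

def laplacian_kernel(adjacency):
    """Kernel over vertices: pseudoinverse of the graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    return np.linalg.pinv(degree - adjacency)

def graph_perceptron(adjacency, order, labels):
    """Online kernel Perceptron predicting +/-1 vertex labels in the given order.

    Returns the number of mistakes, the quantity the cited bounds control.
    """
    K = laplacian_kernel(adjacency)
    alpha = {}                       # vertex -> signed dual coefficient
    mistakes = 0
    for v in order:
        score = sum(a * K[u, v] for u, a in alpha.items())
        y_hat = 1 if score >= 0 else -1
        if y_hat != labels[v]:
            alpha[v] = alpha.get(v, 0.0) + labels[v]
            mistakes += 1
    return mistakes

# Toy example: a path graph 0-1-2-3 with a single label change.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(graph_perceptron(A, order=[0, 2, 1, 3], labels={0: 1, 1: 1, 2: -1, 3: -1}))
```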
Worst-Case Analysis of the Perceptron and Exponentiated Update Algorithms
The absolute loss is the absolute difference between the desired and predicted outcome. This paper demonstrates worst-case upper bounds on the absolute loss for the Perceptron learning algorithm and the Exponentiated Update learning algorithm, which is related to the Weighted Majority algorithm. The bounds characterize the behavior of the algorithms over any sequence of trials, where each trial...
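For contrast with the Perceptron's additive update, an Exponentiated Update (EG-style) learner keeps its weights on the probability simplex and updates them multiplicatively; under the absolute loss, the relevant subgradient is driven by the sign of the prediction error. The sketch below is a generic EG-style loop under an assumed learning rate and inputs scaled to [0, 1], not the exact algorithm analyzed in the cited paper.

```python
import numpy as np

def exponentiated_update(X, y, eta=0.1):
    """Online EG-style learner for the absolute loss |y_hat - y|.

    Weights stay on the probability simplex; each trial applies a
    multiplicative update driven by the sign of the prediction error.
    """
    n, d = X.shape
    w = np.full(d, 1.0 / d)                   # uniform start on the simplex
    total_loss = 0.0
    for t in range(n):
        y_hat = float(w @ X[t])
        total_loss += abs(y_hat - y[t])
        grad = np.sign(y_hat - y[t]) * X[t]   # subgradient of |w.x - y| in w
        w *= np.exp(-eta * grad)              # multiplicative update
        w /= w.sum()                          # renormalize to the simplex
    return w, total_loss
```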
The Adviceptron: Giving Advice To The Perceptron
We propose a novel approach for incorporating prior knowledge into the perceptron. The goal is to update the hypothesis taking into account both label feedback and prior knowledge, in the form of soft polyhedral advice, so as to make increasingly accurate predictions on subsequent rounds. Advice helps speed up and bias learning so that good generalization can be obtained with less data. The upd...
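The Adviceptron's actual update rule is not spelled out in the snippet above, so the sketch below is only a loose illustration of the general idea: advice given as a polyhedral region with an advised label can influence a Perceptron by acting as a source of pseudo-examples. The sampling of advice points and the mixing step size are assumptions made here for illustration, not the method of the paper.

```python
import numpy as np

def perceptron_with_advice(stream, advice_samples, advice_label, mix=0.5):
    """Perceptron that also updates on points sampled from an advice region.

    stream:          iterable of (x, y) labeled examples, y in {-1, +1}
    advice_samples:  points assumed to lie in the advice polytope {x : Dx <= d}
    advice_label:    label the advice assigns to that region
    mix:             step size used for advice-driven updates (assumed)
    """
    w = np.zeros(len(advice_samples[0]))
    for x, y in stream:
        if y * np.dot(w, x) <= 0:             # usual mistake-driven update
            w += y * x
        for z in advice_samples:              # nudge w toward honoring the advice
            if advice_label * np.dot(w, z) <= 0:
                w += mix * advice_label * z
    return w
```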
Journal: CoRR
Volume: abs/1305.0208
Pages: -
Year of publication: 2013